.<<psycho.2[s83,jmc] Draft for Psychology Today>>
.require "memo.pub[let,jmc]" source;
.cb THE LITTLE THOUGHTS OF THINKING MACHINES
.cb by John McCarthy
Introduction
When we interact with computers and other machines, we often
use language ordinarily used for talking about people. We may say,
%2"It thinks I'm overdrawn, because it doesn't yet know about
the deposit I made yesterday"%1.
...
Philosophers and English teachers criticize this as
⊗anthropomorphism. Indeed anthropomorphism has often led to intellectual
disaster, and people still often make anthropomorphic
mistakes. Nevertheless, as our daily lives involve ever more
sophisticated computers, we will find that ascribing little thoughts to
machines will be increasingly useful in understanding how to get the most
good out of them. We just have to be careful and not get carried away.
We begin with an extremely trivial example from the instructions
for an electric blanket.
%2"Place the control near the bed in a place that is neither hotter nor
colder than the room itself. If the control is placed on a radiator or
radiant heated floors, it will "think" the entire room is hot and will
lower your blanket temperature, making your bed too cold. If the control
is placed on the window sill in a cold draft, it will "think" the entire
room is cold and will heat up your bed so it will be too hot."%1
The example is extreme, because most people don't need the
word "think" in order to understand how a thermostatic control works.
Nevertheless, the blanket manufacturer was probably right in believing
that it would help some users.
Anthropomorphism
Since I advocate some anthropomorphism, I had better explain
the difference between bad and good anthropomorphism.
First of all we have the anthropomorphism of primitive religion.
Primitive religion had a substantial scientific and technological
motivation, i.e. to understand natural phenomena and control them.
In fact this was probably its main motivation; present
religious motivations, like finding a reason for the world, a purpose
for life, or a basis for morality, probably came later.
Lightning is explained as the hammer of Thor or Zeus.
The rainbow is taken as a sign that God will
destroy the world by fire next time. We have the ant as advice
not to be lazy. We have the rival theories of lightning striking
churches with which Benjamin Franklin's lightning rod had to contend.
One said that it was a sign that God was displeased with the
congregation, and the other said that it represented an attack on
the congregation by evil spirits of the air. The technology
associated with this science involved placating or exorcising the spirits
behind the phenomena whose modification was desired.
This required conjecturing what the gods or spirits knew, believed, wanted,
liked and disliked.
Since physical phenomena are not controlled by spirits, this
science and technology was unsuccessful, but it was only gradually
abandoned as technology that actually worked came along. As bits of
technology were empirically discovered, the corresponding magic retreated,
but the wholesale rout of superstition didn't occur until the advent of
modern science. Even then it took a hundred years after Franklin's
invention of 1756 before lightning rods fully won out over prayer as a
means of protecting churches.
There is also a non-serious form of anthropomorphism typified by
the so-called Murphy's laws. "If it can go wrong it will. Nature
always sides with the hidden flaw. Bread always falls with the buttered
side down". This seems to be entirely a form of humor. At least no-one,
not even in Los Angeles, has established a church of Murphyism.
It is also common to ascribe personalities to cars, boats and
other machinery. It is hard to say whether anyone takes this seriously.
Anyway I'm not supporting any of these things.
The three stances of Daniel Dennett
Daniel Dennett, a philosopher at Tufts University, has proposed
three attitudes aimed at understanding a system with which one interacts.
The first attitude he calls the ⊗physical ⊗stance. We look at the
system
in terms of its structure at various levels of organization.
Its parts have their own properties and they interact physically
in ways that we know about. In principle the analysis can go down
to the atoms and their parts. Modern science expects
that such an analysis would explain everything if only we knew
the structure and the laws of interaction of the parts
and could do the calculations.
The evidence supports this, but we usually must act without knowing
structure or the laws of interaction, and we couldn't do the calculations
if we did.
Often the ⊗design ⊗stance is more helpful than the physical
stance. We analyze something in
terms of the purpose for which it is designed. Dennett's elegant
example is the alarm clock. We can usually figure out what an
alarm clock will do, e.g. when it will go off, without knowing
whether it is made of springs and gears or of integrated circuits.
The user of an alarm clock typically doesn't know much about
its internal structure, and this information wouldn't be of much
use. Notice that if an alarm clock breaks,
its repair requires taking the physical
stance.
The design stance is appropriate not only for machinery but also
for the parts of an organism. It is amusing that we can't attribute
a purpose to the existence of ants, but we can find a purpose for
the glands that emit the substances that other ants follow.
Darwin's theory of natural selection justifies analyzing
the organs and the behavior of organisms in terms of purpose.
Dennett's third stance is the ⊗intentional ⊗stance, and this is
what we will often need for understanding computer programs.
We try to understand
the behavior of a system by ascribing to it beliefs, goals,
intentions, wants, likes and dislikes and other mental qualities.
What mental qualities and processes may be ascribed to machines?
The reason for ascribing mental qualities and mental
processes to machines is the same as for ascribing them to
other people. It helps us understand what they will do, how our
actions will affect them and how to compare them with ourselves.
One way to proceed would be to try to define what qualities
are involved in having a mind and then to examine machines to see
if they have minds. This yes-or-no attitude towards mind wouldn't
help, because if we put most of the qualities of the human mind
into our definition, we would conclude that present machines have
no minds. That would end the matter, but then we wouldn't get the
benefit of our existing intuitive psychology in understanding
machines.
Instead we will treat mental qualities piecemeal and try to
determine when it is useful to ascribe each of them.
Returning to the thermostat, we may imagine that it can
have exactly three beliefs. It may believe the room is too cold
or that it is too hot or that it is ok. It has no other beliefs;
for example, it doesn't know it is a thermostat.
I chose the thermostat example precisely because it is possible
to understand its behavior without the concept of belief. Indeed if we only
had thermostats to think about we wouldn't bother with the concept of
belief at all. Analogously if the only sets we ever wanted to think about
had one member we wouldn't bother with the concept of number or even
the concept of set. Indeed most languages treat the number one
differently from larger numbers, and zero began to be considered a
number only in the eighteenth century. The point is that the number
system is best understood by including zero as a number even if we
don't have much occasion to count sets without any members. Likewise
a theory of the mental qualities of machines needs to consider the
thermostat as analogous to the number one: a simple limiting case.
Zero would be a stone.
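To make the three-belief picture concrete, here is a minimal sketch in
modern Python of a thermostat whose whole mental life is one of three
beliefs. The belief names, the set point, and the two-degree comfort band
are illustrative assumptions, not a description of any real device.

  # A thermostat whose entire "mental life" is one of three beliefs.
  # The belief names and the two-degree band are illustrative assumptions.
  TOO_COLD, OK, TOO_HOT = "too cold", "ok", "too hot"

  class Thermostat:
      def __init__(self, set_point):
          self.set_point = set_point

      def belief(self, sensed_temperature):
          # The only thing the device can "believe" about the world.
          if sensed_temperature < self.set_point - 1:
              return TOO_COLD
          if sensed_temperature > self.set_point + 1:
              return TOO_HOT
          return OK

      def action(self, sensed_temperature):
          # It "wants" the room at the set point and acts accordingly.
          return {TOO_COLD: "turn heat on",
                  TOO_HOT: "turn heat off",
                  OK: "do nothing"}[self.belief(sensed_temperature)]

  # A control placed on a radiator senses 30 degrees and so "thinks" the
  # whole room is hot, exactly as in the electric blanket instructions.
  print(Thermostat(set_point=20).action(sensed_temperature=30))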
Now consider examples wherein we don't have the information
to give an equivalent physical description.
A computer operating system doesn't run my job. Perhaps it
doesn't ⊗know I want to run. Perhaps it ⊗thinks my account is out
of money. Perhaps it ⊗tried and ⊗discovered that my job was too big
to fit in the available memory. Perhaps it has a bug. The first
three are mentalistic (i.e. intentional) assertions about the state
of the system. The system programmer who wrote the operating system
might be able to translate these assertions into physical assertions
about the state of the computer's memory, but a user of the system
will usually not be able to do so, even if he is a system programmer
by profession.
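As a rough illustration of how such mentalistic assertions line up with
ordinary machine state, here is a hypothetical Python sketch. The field
names (queued, account_balance, memory_required, memory_available) are
invented for the example and describe no particular operating system.

  # Hypothetical scheduler state; the fields are invented for
  # illustration and describe no particular operating system.
  class JobState:
      def __init__(self, queued, account_balance,
                   memory_required, memory_available):
          self.queued = queued
          self.account_balance = account_balance
          self.memory_required = memory_required
          self.memory_available = memory_available

  def why_not_running(job):
      # Each mentalistic description picks out a physical condition of
      # the machine's memory that the user normally cannot inspect.
      if not job.queued:
          return "it doesn't know you want to run"
      if job.account_balance <= 0:
          return "it thinks your account is out of money"
      if job.memory_required > job.memory_available:
          return "it tried and discovered the job is too big to fit"
      return "no mentalistic explanation applies; perhaps it has a bug"

  print(why_not_running(JobState(queued=True, account_balance=0,
                                 memory_required=512,
                                 memory_available=1024)))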
A colleague was billed twice for the same airplane trip
by American Express. In spite of telephoned complaints eliciting
letters of apology, he is still getting letters threatening to close
his account. The billing system is ⊗confused.
This example seems doubtful, because the communication
happens over months. If he were interacting directly with the
American Express computer system from his home terminal or computer
and one inquiry elicited the apology and another the threat, it
would be more obvious that ⊗confused is the right word.
The next example is swiped from the philosopher John Searle.
A person not knowing Chinese memorizes a set of rules for manipulating
Chinese characters. He receives a sentence, applies the rules
and gives back a Chinese sentence. We suppose
that the rules result in an intelligent Chinese conversation about
The Great Helmsman with whoever
gives and receives the text. According to our criteria, the process
consisting of the person carrying out the rules may understand
Chinese. This is analogous to the fact that a computer directly obeys only
its machine language, but a program running on that computer may
obey programs written in a completely different programming language.
Searle doesn't make the distinction between the hardware (the person)
and the process and eventually reaches the conclusion that computers can't
possibly understand Chinese or anything else.
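The distinction between the hardware (the person) and the process can be
sketched as an interpreter. The following made-up Python example, with a
three-entry rule book standing in for Searle's rules, shows a rule follower
that manipulates sentences it does not understand, while the conversation
is a property of the process the rules define.

  # The "person in the room": a rule follower that merely looks up each
  # incoming sentence in a table. The three-entry rule book is a made-up
  # stand-in for Searle's rules.
  RULE_BOOK = {
      "ni hao": "ni hao, hen gaoxing renshi ni",
      "ni hui shuo zhongwen ma": "hui, dangran",
      "zaijian": "zaijian",
  }

  def rule_follower(sentence):
      # The hardware level: pure symbol manipulation, no understanding.
      return RULE_BOOK.get(sentence, "qing zai shuo yi bian")

  # The conversation belongs to the process defined by the rules, not to
  # the rule follower, just as a running program may obey a language the
  # underlying machine's hardware knows nothing about.
  for line in ["ni hao", "ni hui shuo zhongwen ma", "zaijian"]:
      print(line, "->", rule_follower(line))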
How machines differ from people
The mental qualities we can now build into our machines are not
the same as our own. Even when we understand our own minds better, it
probably won't be advantageous to make machines in our own precise mental
image.
Rationality
The full intentional stance involves regarding a system as rational.
We explain its behavior by rules like: %2It intends to do what it
believes will achieve its goals%1. We then interpret facts about
the system as evidence for various goals, beliefs and intentions.
We don't formally define ⊗goal, ⊗believes or ⊗intends.
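A toy version of that rule can be written down directly. The following
sketch assumes a deliberately simplified representation in which each
belief is a pair meaning "doing this action achieves this goal"; it is
meant only to show the shape of the inference, not to be a serious model
of intention.

  # Toy intentional-stance inference: the system intends whatever action
  # it believes will achieve one of its goals. Representing beliefs as
  # (action, goal) pairs is a made-up simplification.
  def intended_actions(beliefs, goals):
      # beliefs: set of (action, goal) pairs, "doing action achieves goal"
      # goals:   set of goals the system is taken to have
      return {action for (action, goal) in beliefs if goal in goals}

  beliefs = {("turn heat on", "warm room"), ("open window", "cool room")}
  goals = {"warm room"}
  print(intended_actions(beliefs, goals))   # {'turn heat on'}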